The core of self-supervised learning for pre-training language models lies in pre-training task design and appropriate data augmentation. Most data augmentations in language model pre-training are context-independent. A seminal contextualized augmentation was recently proposed in ELECTRA, which achieved state-of-the-art performance by introducing an auxiliary generation network (generator) to produce contextualized data augmentation for training a main discrimination network (discriminator). This design, however, introduces the extra computation cost of the generator and the need to tune the relative capacity of the generator and the discriminator. In this paper, we propose a self-augmentation strategy (SAS) in which a single network is used both for regular pre-training and for contextualized data augmentation in later training epochs. Essentially, this strategy eliminates the separate generator: the single network jointly performs two pre-training tasks through MLM (Masked Language Modeling) and RTD (Replaced Token Detection) heads. It avoids the search for an appropriately sized generator, which is critical to performance, as evidenced in ELECTRA and its subsequent variant models. In addition, SAS is a general strategy that can be seamlessly combined with many techniques emerging recently or in the future, such as the disentangled attention mechanism of DeBERTa. Our experiments show that SAS outperforms ELECTRA and other state-of-the-art models on the GLUE tasks at similar or lower computation cost.
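The self-augmentation idea above can be illustrated with a minimal sketch: mask tokens, let the same network's MLM head fill the masks, and derive RTD labels from whether the filled token differs from the original. The toy vocabulary, the random stand-in for MLM predictions, and all function names here are illustrative assumptions, not the paper's implementation; a real model would sample replacements from contextual logits.

```python
import random

random.seed(0)

# Hypothetical toy vocabulary (illustration only; not from the paper).
VOCAB = ["the", "cat", "sat", "on", "mat"]

def mask_tokens(tokens, mask_prob=0.3):
    """MLM corruption: randomly replace tokens with [MASK]."""
    masked = []
    for t in tokens:
        masked.append("[MASK]" if random.random() < mask_prob else t)
    return masked

def mlm_predict(masked_tokens):
    """Stand-in for the single network's MLM head: here we sample a
    vocabulary token uniformly for each [MASK]; a trained model would
    sample from its contextual output distribution instead."""
    return [random.choice(VOCAB) if t == "[MASK]" else t
            for t in masked_tokens]

def build_rtd_example(original):
    """Self-augmentation step: the network's own MLM predictions fill the
    masks, and RTD labels mark positions where the result differs from
    the original token (1 = replaced, 0 = original)."""
    filled = mlm_predict(mask_tokens(original))
    rtd_labels = [int(f != o) for f, o in zip(filled, original)]
    return filled, rtd_labels

sentence = ["the", "cat", "sat", "on", "the", "mat"]
filled, labels = build_rtd_example(sentence)
```

The key point the sketch captures is that no second network is needed: the same parameters that produce the MLM predictions also receive the RTD training signal on the self-generated corpus.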
Bayesian neural networks are powerful inference methods because they account for randomness in both the data and the network model. Uncertainty quantification at the output of neural networks is critical, especially for applications such as autonomous driving and hazardous weather forecasting. However, approaches for the theoretical analysis of Bayesian neural networks remain limited. This paper takes a step toward the mathematical quantification of uncertainty in neural network models and proposes a computationally efficient, cubature-rule-based uncertainty quantification approach that captures the layer-wise uncertainties of Bayesian neural networks. The proposed approach approximates the first two moments of the posterior distribution of the parameters by propagating cubature points across the network nonlinearities. Simulation results show that the proposed approach can achieve more diverse layer-wise uncertainty quantification results for neural networks with a fast convergence rate.
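The moment-propagation mechanism described above can be sketched with a third-degree spherical cubature rule: place 2n equally weighted points around the parameter mean, push each through the layer nonlinearity, and read off the empirical mean and variance. This is a minimal illustration assuming a diagonal covariance and a single tanh neuron; the layer, inputs, and function names are assumptions for demonstration, not the paper's exact construction (a full covariance would use a Cholesky factor for the point spread).

```python
import math

def cubature_points(mean, var_diag):
    """Third-degree spherical cubature for x ~ N(mean, diag(var_diag)):
    2n points at mean +/- sqrt(n * var_i) along each coordinate axis."""
    n = len(mean)
    pts = []
    for i in range(n):
        step = math.sqrt(n * var_diag[i])
        for sign in (+1.0, -1.0):
            p = list(mean)
            p[i] += sign * step
            pts.append(p)
    return pts

def propagate_moments(mean, var_diag, f):
    """Approximate E[f(x)] and Var[f(x)] by pushing the cubature points
    through the nonlinearity f (all points carry equal weight 1/2n)."""
    ys = [f(p) for p in cubature_points(mean, var_diag)]
    m = sum(ys) / len(ys)
    v = sum((y - m) ** 2 for y in ys) / len(ys)
    return m, v

# Toy "layer": one tanh neuron with two uncertain weights and fixed inputs
# (values chosen for illustration).
def layer(p):
    w1, w2 = p
    x1, x2 = 0.5, -1.0
    return math.tanh(w1 * x1 + w2 * x2)

mean_out, var_out = propagate_moments([0.2, 0.1], [0.05, 0.05], layer)
```

Repeating this step layer by layer, with each layer's output moments feeding the next, is what yields the layer-wise uncertainty estimates without Monte Carlo sampling.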